
    Attribute-Guided Face Generation Using Conditional CycleGAN

    We are interested in attribute-guided face generation: given a low-res face input image and an attribute vector that can be extracted from a high-res image (the attribute image), our method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data, because the low/high-res training images and the high-res attribute images may not necessarily align with each other, and 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate impressive results with the attribute-guided conditional CycleGAN, which can synthesize realistic face images whose appearance is easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). By using the attribute image as the identity to produce the corresponding conditional vector and incorporating a face verification network, the attribute-guided network becomes an identity-guided conditional CycleGAN that produces impressive results on identity transfer. We demonstrate three applications of the identity-guided conditional CycleGAN: identity-preserving face super-resolution, face swapping, and frontal face generation, all of which consistently show the advantage of our method.
    Comment: ECCV 201
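
    The abstract describes conditioning a CycleGAN generator on an attribute vector extracted from the attribute image. As a rough illustration only, the sketch below shows one common way to inject such a vector: tiling it spatially and concatenating it with image features. The layer sizes, upscaling factor, and attribute dimension are assumptions for the example, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Attribute-conditioned generator sketch: the attribute vector is tiled
        spatially and concatenated with encoder features (a common conditioning
        scheme; the paper's exact design may differ)."""
        def __init__(self, attr_dim=10, ch=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(inplace=True))
            self.dec = nn.Sequential(
                nn.Conv2d(ch + attr_dim, ch, 3, 1, 1), nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=8, mode="nearest"),  # low-res -> high-res (8x, assumed)
                nn.Conv2d(ch, 3, 3, 1, 1), nn.Tanh())

        def forward(self, lowres, attrs):
            feat = self.enc(lowres)                                # (B, ch, H/2, W/2)
            a = attrs[:, :, None, None].expand(-1, -1, *feat.shape[2:])
            return self.dec(torch.cat([feat, a], dim=1))           # conditioned high-res output

    # Usage: a 32x32 low-res face and a 10-dim attribute vector -> 128x128 output (sizes assumed)
    g = ConditionalGenerator()
    print(g(torch.randn(2, 3, 32, 32), torch.rand(2, 10)).shape)   # torch.Size([2, 3, 128, 128])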

    Investigate the interaction between dark matter and dark energy

    In this paper we investigate the interaction between dark matter and dark energy by considering two different interacting scenarios, i.e., the cases of a constant interaction function and a variable interaction function. By fitting the interacting models to current observational data, it is found that the interaction strength is non-vanishing but weak for the case of a constant interaction function, and that the interaction is not obvious for the case of a variable interaction function. In addition, to see the influence of the interaction, we also investigate the evolution of the interaction function, the effective state parameter of dark energy, and the energy density of dark matter. Finally, some geometrical quantities in the interacting scenarios are discussed.
    Comment: 14 pages, 6 figures
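
    The abstract does not write out the interacting model; a common formulation from the interacting dark energy literature (an assumption here, not necessarily the paper's exact equations) couples the continuity equations of dark matter and dark energy through an interaction term Q:

    \dot{\rho}_{\rm dm} + 3H\rho_{\rm dm} = Q, \qquad
    \dot{\rho}_{\rm de} + 3H\,(1 + w_{\rm de})\,\rho_{\rm de} = -Q, \qquad
    Q_{\rm const} = 3H\delta\,\rho_{\rm dm}, \quad Q_{\rm var} = 3H\delta(a)\,\rho_{\rm dm}, \qquad
    w_{\rm de}^{\rm eff} = w_{\rm de} + \frac{Q}{3H\rho_{\rm de}}.

    Here H is the Hubble parameter and the coupling δ (constant or scale-factor dependent) plays the role of the interaction strength constrained by the data.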

    Influence of Waterside Buildings’ Layout on Wind Environment and the Relation with Design Based on a Case Study of the She Kou Residential District

    It is important to improve residential thermal comfort in high-density cities, for which the wind environment is crucial. Waterside buildings benefit from a micro-hydrological climate in summer, which should be used to enhance residential thermal comfort, especially in the subtropical region. In order to propose design approaches based on the outdoor thermal comfort of waterside residential areas, a case study of the Shenzhen She Kou residential district has been carried out. It focused on the various factors that influence the wind environment and thereby thermal comfort. Using the wind velocity ratio (ΔRi) criterion, factors such as building development volume, building orientation and layout pattern, and open space arrangement were broadly explored with FLUENT simulations. Among the planning parameters, the Floor Area Ratio (FAR) significantly influences the wind environment, and a smaller FAR is better. For the vertical layout of the buildings, a multi-storey layout and a mixed multi-storey and sub-high-rise layout provide a better wind environment. For the horizontal layout, the determinant arrangement is better than the peripheral one. Other factors, such as the buildings' orientation towards the road, the buildings' height, and the open space setting, also influence the wind environment. In general, the more a design layout favors wind flow, the better the wind environment it can achieve.
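
    The abstract evaluates layouts with the wind velocity ratio (ΔRi) criterion without defining it; a commonly used definition (assumed here for illustration) compares the pedestrian-level wind speed at a test point with a reference speed unaffected by the buildings:

    R_i = \frac{V_i}{V_0},

    where V_i is the simulated mean wind speed at pedestrian height at test point i and V_0 is the reference (free-stream) wind speed at the same height; ΔR_i can then be read as the change of this ratio between layout schemes, with larger values indicating better ventilation.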

    Abdominal multi-organ segmentation in CT using Swinunter

    Abdominal multi-organ segmentation in computed tomography (CT) is crucial for many clinical applications, including disease detection and treatment planning. Deep learning methods have shown unprecedented performance in this area. However, it is still quite challenging to accurately segment different organs with a single network due to the vague boundaries of organs, the complex background, and the substantially different organ size scales. In this work we use a transformer-based model for training. It was found in previous years' competitions that essentially all of the top five methods were CNN-based, which is likely because the limited data volume prevented transformer-based methods from realizing their full advantage. The thousands of samples in this competition may enable the transformer-based model to achieve better results. The results on the public validation set also show that the transformer-based model can achieve an acceptable result and inference time.
    Comment: 8 pages. arXiv admin note: text overlap with arXiv:2201.01266 by other author
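
    The abstract does not list the model configuration; the name and the cited overlap with arXiv:2201.01266 suggest a SwinUNETR-style architecture. A minimal sketch of instantiating such a model through MONAI follows, with patch size, channel count, and number of classes assumed purely for illustration, not the authors' exact setup.

    import torch
    from monai.networks.nets import SwinUNETR

    # Assumed setup: 96^3 CT patches, 1 input channel, 14 output classes
    # (13 abdominal organs + background); not necessarily the competition configuration.
    model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=14, feature_size=48)

    x = torch.randn(1, 1, 96, 96, 96)   # dummy CT patch: (batch, channel, D, H, W)
    with torch.no_grad():
        logits = model(x)               # (1, 14, 96, 96, 96) per-voxel class logits
    print(logits.shape)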

    Data-Centric Diet: Effective Multi-center Dataset Pruning for Medical Image Segmentation

    This paper seeks to address dense labeling problems where a significant fraction of the dataset can be pruned without sacrificing much accuracy. We observe that, on standard medical image segmentation benchmarks, the loss-gradient-norm-based metrics for individual training examples used in image classification fail to identify the important samples. To address this issue, we propose a data pruning method that takes into account the training dynamics on target regions using a Dynamic Average Dice (DAD) score. To the best of our knowledge, we are among the first to address data importance in dense labeling tasks in the field of medical image analysis, making the following contributions: (1) investigating the underlying causes with rigorous empirical analysis, and (2) determining an effective data pruning approach for dense labeling problems. Our solution can be used as a strong yet simple baseline for selecting important examples for medical image segmentation with combined data sources.
    Comment: Accepted by ICML workshops 202
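
    The abstract introduces the Dynamic Average Dice (DAD) score without giving its formula. The sketch below is one plausible reading, stated only as an assumption: track each training sample's Dice on the target region across epochs, average it, and prune a fraction of the dataset according to the resulting ranking. The helper names and the "keep the hardest samples" rule are illustrative, not the paper's definition.

    import numpy as np

    def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
        """Dice overlap between two binary masks."""
        inter = (pred * gt).sum()
        return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

    def dynamic_average_dice(per_epoch_preds, gt) -> float:
        """Average a sample's Dice on the target region over training epochs
        (illustrative definition; the paper's exact DAD score may differ)."""
        return float(np.mean([dice(p, gt) for p in per_epoch_preds]))

    def prune_dataset(dad_scores, keep_fraction=0.6):
        """Keep indices of the hardest samples (lowest DAD) -- one plausible pruning rule."""
        order = np.argsort(dad_scores)                 # ascending: hardest first
        return order[: int(len(dad_scores) * keep_fraction)]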

    Learning to In-paint: Domain Adaptive Shape Completion for 3D Organ Segmentation

    We aim to incorporate explicit shape information into current 3D organ segmentation models. Different from previous works, we formulate shape learning as an in-painting task, named Masked Label Mask Modeling (MLM). In MLM, learnable mask tokens are fed into transformer blocks to complete the label mask of an organ. To transfer MLM shape knowledge to the target domain, we further propose a novel shape-aware self-distillation with both an in-painting reconstruction loss and a pseudo loss. Extensive experiments on five public organ segmentation datasets show consistent improvements over prior art, with at least a 1.2-point gain in Dice score, demonstrating the effectiveness of our method in challenging unsupervised domain adaptation scenarios including: (1) in-domain organ segmentation; (2) unseen domain segmentation; and (3) unseen organ segmentation. We hope this work will advance shape analysis and geometric learning in medical imaging.
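
    The abstract sketches the idea of Masked Label Mask Modeling: part of an organ's label mask is replaced by learnable mask tokens, transformer blocks complete the mask, and an in-painting reconstruction loss supervises the masked region. The toy module below illustrates that idea in 2D under assumed sizes; it is a sketch, not the authors' 3D architecture.

    import torch
    import torch.nn as nn

    class MaskedLabelMaskModel(nn.Module):
        """Toy 2D label-mask in-painting (MLM-style); all dimensions are illustrative."""
        def __init__(self, patch=8, dim=128, img=64):
            super().__init__()
            self.patch, self.n = patch, (img // patch) ** 2
            self.embed = nn.Linear(patch * patch, dim)
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos = nn.Parameter(torch.zeros(1, self.n, dim))
            self.blocks = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
            self.head = nn.Linear(dim, patch * patch)

        def forward(self, label_mask, mask_ratio=0.5):
            # label_mask: (B, 1, H, W) binary organ mask
            B, p = label_mask.size(0), self.patch
            patches = label_mask.unfold(2, p, p).unfold(3, p, p).reshape(B, self.n, -1)
            tokens = self.embed(patches) + self.pos
            keep = torch.rand(B, self.n, device=label_mask.device) > mask_ratio
            tokens = torch.where(keep.unsqueeze(-1), tokens,
                                 self.mask_token.expand(B, self.n, -1))
            recon = self.head(self.blocks(tokens))          # completed label-mask patches
            return ((recon - patches) ** 2)[~keep].mean()   # in-painting loss on masked patches

    loss = MaskedLabelMaskModel()(torch.randint(0, 2, (2, 1, 64, 64)).float())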

    Label-Assemble: Leveraging Multiple Datasets with Partial Labels

    The success of deep learning relies heavily on large and diverse datasets with extensive labels, but we often only have access to several small datasets associated with partial labels. In this paper, we start a new initiative, "Label-Assemble", that aims to unleash the full potential of partially labeled data from an assembly of public datasets. Specifically, we introduce a new dynamic adapter to encode different visual tasks, which addresses the challenges of incomparable, heterogeneous, or even conflicting labeling protocols. We also employ pseudo-labeling and consistency constraints to harness data with missing labels and to mitigate the domain gap across datasets. From rigorous evaluations on three natural imaging and six medical imaging tasks, we discover that learning from "negative examples" facilitates both classification and segmentation of classes of interest. This sheds new light on the computer-aided diagnosis of rare diseases and emerging pandemics, wherein "positive examples" are hard to collect, yet "negative examples" are relatively easier to assemble. Apart from exceeding prior art on the ChestXray benchmark, our model is particularly strong at identifying diseases of minority classes, yielding an improvement of over 3 points on average. Remarkably, when using existing partial labels, our model's performance is on par with that using full labels, eliminating the need for an additional 40% of annotation costs. Code will be made available at https://github.com/MrGiovanni/LabelAssemble
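
    The abstract mentions pseudo-labeling and consistency constraints for harnessing data with missing labels. The sketch below shows one way such a loss could look in a multi-label classification setting: annotated classes receive a supervised term, while confident predictions on unannotated classes act as pseudo-targets. The function, its thresholds, and the masking convention are assumptions for illustration and not the code in the linked repository.

    import torch
    import torch.nn.functional as F

    def assemble_loss(logits, labels, annotated, pseudo_thresh=0.9):
        """Partial-label multi-label loss (illustrative sketch).
        logits:    (B, C) raw predictions
        labels:    (B, C) 0/1 targets; entries with annotated == 0 are undefined
        annotated: (B, C) float mask, 1 where the class is labeled in the source dataset
        """
        # Supervised term over annotated classes only.
        bce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
        sup = (bce * annotated).sum() / annotated.sum().clamp(min=1)

        # Pseudo-label term: confident predictions on unannotated classes become targets.
        probs = torch.sigmoid(logits).detach()
        confident = (((probs > pseudo_thresh) | (probs < 1 - pseudo_thresh))
                     & (annotated == 0)).float()
        pseudo = (probs > 0.5).float()
        unsup = (F.binary_cross_entropy_with_logits(logits, pseudo, reduction="none")
                 * confident).sum() / confident.sum().clamp(min=1)
        return sup + unsup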